Results 1 - 20 of 32,707
1.
Cereb Cortex ; 34(4), 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38629796

ABSTRACT

Neuroimaging studies have shown that the neural representation of imagery is closely related to the perception modality; however, the undeniably different experiences of perception and imagery indicate clear differences in their neural mechanisms, which cannot be explained by the simple theory that imagery is a form of weak perception. Considering the importance of functional integration across brain regions in neural activity, we conducted a correlation analysis of neural activity in brain regions jointly activated by auditory imagery and perception, obtaining brain functional connectivity (FC) networks with a consistent structure across the two modalities. However, the connection values between areas in the superior temporal gyrus and the right precentral cortex were significantly higher in auditory perception than in imagery. In addition, modality decoding based on FC patterns showed that the FC networks of auditory imagery and perception are significantly distinguishable. Subsequently, voxel-level FC analysis further verified the distribution of voxels with significant connectivity differences between the two modalities. This study characterizes both the correspondences and the differences between auditory imagery and perception in terms of brain information interaction, providing a new perspective for investigating the neural mechanisms of different modal information representations.
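
As an illustration of the decoding approach described above, the sketch below correlates ROI time series into trial-wise functional connectivity (FC) matrices and classifies modality from their upper triangles. This is not the authors' pipeline: the data shapes, ROI count, and the linear SVM are assumptions for demonstration, and the data are random placeholders.

```python
# Sketch: classify perception vs. imagery from functional-connectivity patterns.
# Hypothetical data: roi_ts has shape (n_trials, n_rois, n_timepoints).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_rois, n_tp = 40, 10, 200
roi_ts = rng.standard_normal((n_trials, n_rois, n_tp))   # placeholder fMRI signals
labels = np.repeat([0, 1], n_trials // 2)                # 0 = perception, 1 = imagery

def fc_features(ts):
    """Pearson FC matrix per trial, vectorized to its upper triangle."""
    iu = np.triu_indices(ts.shape[1], k=1)
    return np.array([np.corrcoef(trial)[iu] for trial in ts])

X = fc_features(roi_ts)
scores = cross_val_score(SVC(kernel="linear"), X, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```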


Subject(s)
Auditory Cortex, Brain Mapping, Brain Mapping/methods, Imagination, Brain/diagnostic imaging, Auditory Perception, Cerebral Cortex, Magnetic Resonance Imaging/methods, Auditory Cortex/diagnostic imaging
2.
Sci Rep ; 14(1): 7764, 2024 04 02.
Article in English | MEDLINE | ID: mdl-38565622

ABSTRACT

Sound is sensed by the ear but can also be felt on the skin by means of vibrotactile stimulation. Little research has addressed the perceptual implications of vibrotactile stimulation in the realm of music. Here, we studied which perceptual dimensions of music listening are affected by vibrotactile stimulation and whether spatial segregation of the vibrations improves it. Forty-one listeners were presented with vibrotactile stimuli via a chair's surfaces (left and right armrests, backrest, seat) in addition to music presented over headphones. Vibrations for each surface were derived either from individual tracks of the music (multi condition) or conjointly from a mono rendering (mono condition); incongruent and headphones-only conditions served as comparisons. Listeners evaluated unfamiliar music from popular genres according to valence, arousal, groove, the feeling of being part of a live performance, the feeling of being part of the music, and liking. Results indicated that the multi and mono vibration conditions robustly enhanced the musical experience compared with listening via headphones alone. Vibrotactile enhancement was strong in the latent dimension of 'musical engagement', encompassing the sense of being part of the music, arousal, and groove. These findings highlight the potential of vibrotactile cues for creating intense musical experiences.


Subject(s)
Music, Sound, Vibration, Emotions, Cues (Psychology), Auditory Perception/physiology
3.
PLoS One ; 19(4): e0300219, 2024.
Article in English | MEDLINE | ID: mdl-38568916

ABSTRACT

Aphantasia is characterised by the inability to create mental images in one's mind. Studies investigating imagery impairments typically focus on the visual domain. However, it is possible to generate many different forms of imagery, including imagined auditory, kinesthetic, tactile, motor, taste, and other experiences. Recent studies show that individuals with aphantasia report a lack of imagery in modalities other than vision, including audition. However, to date, no research has examined whether these reductions in self-reported auditory imagery are associated with decrements in tasks that require auditory imagery. Understanding the extent to which visual and auditory imagery deficits co-occur can help to better characterise the core deficits of aphantasia and provide an alternative perspective on theoretical debates about the extent to which imagery draws on modality-specific or modality-general processes. In the current study, individuals who self-identified as aphantasic and matched control participants with typical imagery performed two tasks: a musical pitch-based imagery task and a voice-based categorisation task. The majority of participants with aphantasia self-reported significant deficits in both auditory and visual imagery. However, we did not find a concomitant decrease in performance on tasks requiring auditory imagery, either in the full sample or when considering only those participants who reported significant deficits in both domains. These findings are discussed in relation to the mechanisms that might obscure the observation of imagery deficits in auditory imagery tasks in people who report reduced auditory imagery.


Subject(s)
Imagery, Psychotherapy, Imagination, Humans, Self Report, Imagery, Psychotherapy/methods, Auditory Perception
4.
Sci Rep ; 14(1): 7627, 2024 04 01.
Article in English | MEDLINE | ID: mdl-38561365

ABSTRACT

This study investigated the effects of reproducing an ultrasonic component above 20 kHz on subjective impressions of water sounds, using psychological and physiological measures obtained by the semantic differential method and electroencephalography (EEG), respectively. The results indicated that the ultrasonic component affected the subjective impression of the water sounds. Regarding the relationship between the psychological and physiological measures, a moderate correlation was confirmed between the EEG change rate and subjective impressions. However, no difference in this relationship was found between conditions with and without the ultrasonic component, suggesting that ultrasound does not directly affect the relationship between subjective impressions and EEG energy at the current stage. Furthermore, the correlations calculated for the left and right channels in the occipital region differed significantly, suggesting functional asymmetry for sound perception between the right and left hemispheres.
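
A minimal sketch of the kind of psychophysiological correlation reported above, assuming the 'EEG change rate' is a baseline-normalized power change (the abstract does not specify its computation) and using placeholder data:

```python
# Sketch: correlate an EEG change rate with semantic-differential ratings.
# Hypothetical definition: change rate = (power during sound - baseline) / baseline.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_subjects = 20
baseline_power = rng.uniform(1.0, 2.0, n_subjects)          # placeholder band power
stimulus_power = baseline_power * rng.uniform(0.8, 1.3, n_subjects)
ratings = rng.uniform(1, 7, n_subjects)                      # e.g. 7-point "pleasantness"

change_rate = (stimulus_power - baseline_power) / baseline_power
r, p = pearsonr(change_rate, ratings)
print(f"r = {r:.2f}, p = {p:.3f}")
```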


Subject(s)
Hearing, Sound, Electroencephalography/methods, Auditory Perception/physiology, Acoustic Stimulation
5.
Sci Rep ; 14(1): 8814, 2024 04 16.
Article in English | MEDLINE | ID: mdl-38627479

ABSTRACT

Rhythm perception and synchronisation is a musical ability with a neural basis, defined as the capacity to perceive rhythm in music and to synchronise body movements with it. This study aimed to examine synchronisation errors and physiological responses of subjects to metrorhythmic stimuli under synchronous and pseudosynchronous stimulation (synchronisation with a rhythm that appears externally controlled but is in fact controlled or produced by the subject's own tapping). Nineteen subjects without diagnosed motor disorders participated in the study. Two tests were performed, in which the electromyography signal and reaction time were recorded using the NORAXON system. In addition, physiological signals such as electrodermal activity and blood volume pulse were measured using the Empatica E4. Study 1 was an adaptation of the finger-tapping test in pseudosynchrony with a given metrorhythmic stimulus, with preferred, decreasing, and increasing tempo choices. Study 2 consisted of metrorhythmic synchronisation during a heel-stomping test. Numerous statistically significant correlations and parameters were found relating the subjects' responses to their musical education and their musical and sports activities. Most of the differentiating characteristics showed evidence of a group division according to engagement in musical activities. Detailed analyses of synchronisation errors can contribute to the development of methods to improve the rehabilitation of subjects with motor dysfunction, and thereby to the development of an expert system that takes personalised musical preferences into account.
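
The core quantity in such studies, the synchronisation error (asynchrony), can be computed by pairing each tap with the nearest metronome onset. Below is a minimal sketch with simulated tap times; the tempo, trial length, and timing noise are assumptions, not the study's parameters:

```python
# Sketch: synchronisation error (asynchrony) between taps and metronome onsets.
# Hypothetical setup: a 120-BPM metronome and recorded tap times in seconds.
import numpy as np

rng = np.random.default_rng(2)
ioi = 60.0 / 120.0                                   # inter-onset interval at 120 BPM
metronome = np.arange(20) * ioi
taps = metronome + rng.normal(-0.02, 0.03, metronome.size)  # slight typical anticipation

# Pair each tap with the nearest metronome onset, then summarise the error.
nearest = metronome[np.abs(taps[:, None] - metronome[None, :]).argmin(axis=1)]
asynchrony = taps - nearest
print(f"mean asynchrony: {asynchrony.mean()*1000:.1f} ms, "
      f"SD: {asynchrony.std()*1000:.1f} ms")
```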


Subject(s)
Music, Sports, Humans, Movement/physiology, Reaction Time, Auditory Perception/physiology, Acoustic Stimulation
6.
Sci Rep ; 14(1): 8739, 2024 04 16.
Article in English | MEDLINE | ID: mdl-38627572

ABSTRACT

Inspired by recent findings in the visual domain, we investigated whether the stimulus-evoked pupil dilation reflects temporal statistical regularities in sequences of auditory stimuli. We conducted two preregistered pupillometry experiments (experiment 1, n = 30, 21 females; experiment 2, n = 31, 22 females). In both experiments, human participants listened to sequences of spoken vowels in two conditions. In the first condition, the stimuli were presented in a random order and, in the second condition, the same stimuli were presented in a sequence structured in pairs. The second experiment replicated the first experiment with a modified timing and number of stimuli presented and without participants being informed about any sequence structure. The sound-evoked pupil dilation during a subsequent familiarity task indicated that participants learned the auditory vowel pairs of the structured condition. However, pupil diameter during the structured sequence did not differ according to the statistical regularity of the pair structure. This contrasts with similar visual studies, emphasizing the susceptibility of pupil effects during statistically structured sequences to experimental design settings in the auditory domain. In sum, our findings suggest that pupil diameter may serve as an indicator of sound pair familiarity but does not invariably respond to task-irrelevant transition probabilities of auditory sequences.
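
To make the two conditions concrete, the sketch below generates a structured stream in which fixed vowel pairs always occur together (within-pair transition probability 1.0) versus a fully random stream. The vowel inventory and pairings are illustrative assumptions, not the study's stimuli:

```python
# Sketch: build vowel streams for the structured vs. random conditions.
import random

random.seed(3)
pairs = [("a", "e"), ("i", "o"), ("u", "y")]   # hypothetical fixed vowel pairs

def structured_sequence(n_pairs):
    """Within a pair P(2nd | 1st) = 1.0; transitions across pairs are random."""
    seq = []
    for _ in range(n_pairs):
        seq.extend(random.choice(pairs))
    return seq

def random_sequence(n_items):
    vowels = [v for p in pairs for v in p]
    return [random.choice(vowels) for _ in range(n_items)]

print(structured_sequence(5))
print(random_sequence(10))
```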


Subject(s)
Pupil, Sound, Female, Humans, Pupil/physiology, Recognition, Psychology, Auditory Perception/physiology
7.
Nat Commun ; 15(1): 3093, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38600118

ABSTRACT

Sensory-motor interactions in the auditory system play an important role in vocal self-monitoring and control. These result from top-down corollary discharges, relaying predictions about vocal timing and acoustics. Recent evidence suggests that such signals may comprise two distinct processes, one suppressing neural activity during vocalization and another enhancing sensitivity to sensory feedback, rather than a single mechanism. Single-neuron recordings have been unable to disambiguate the two because motor signals overlap with sensory inputs. Here, we sought to disentangle these processes in marmoset auditory cortex during production of multi-phrased 'twitter' vocalizations. Temporal responses revealed two timescales of vocal suppression: temporally precise phasic suppression during phrases and sustained tonic suppression. Both components were present within individual neurons; however, phasic suppression presented broadly regardless of frequency tuning (gating), while tonic suppression was selective for vocal frequencies and feedback (prediction). This suggests that auditory cortex is modulated by concurrent corollary discharges with distinct computational mechanisms during vocalization.


Subject(s)
Auditory Cortex, Animals, Auditory Cortex/physiology, Neurons/physiology, Feedback, Sensory/physiology, Feedback, Callithrix/physiology, Vocalization, Animal/physiology, Auditory Perception/physiology, Acoustic Stimulation
8.
Neuroreport ; 35(6): 399-405, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38526973

ABSTRACT

In tonal languages, tone perception involves the processing of both acoustic and phonological information conveyed by tonal signals. In Mandarin, in addition to the four canonical full tones, there exists a group of weak syllables known as neutral tones. This study investigated the impact of lexical frequency effects and the prosodic information associated with neutral tones on the auditory representation of Mandarin compounds. We first selected disyllabic compounds as targets, manipulating their lexical frequencies and prosodic structures. These target compounds were then embedded in selected sentences and presented auditorily to native speakers. During the experiments, participants performed lexical decision tasks while their event-related potentials were recorded. The results showed that the auditory lexical representation of disyllabic compounds was modulated by lexical frequency: rare compounds and compounds with rare first constituents elicited larger N400 effects than frequent compounds. Furthermore, neutral tones played a role in processing, also resulting in larger N400 effects. The significantly increased N400 amplitudes suggest that processing rare compounds and compounds with neutral tones may require more cognitive resources. Additionally, we observed an interaction between lexical frequency and neutral tones, indicating that both can serve as determining cues in the auditory processing of disyllabic compounds.
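
A minimal sketch of the N400 contrast logic described above: average epoched ERPs in a canonical 300-500 ms window and compare conditions with a paired test. The epoch shapes, sampling rate, window, and data are placeholder assumptions:

```python
# Sketch: compare mean N400 amplitude (300-500 ms) between rare and frequent compounds.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(4)
times = np.linspace(-0.2, 0.8, 501)                     # seconds, ~500 Hz epochs
n_subj = 24
rare = rng.standard_normal((n_subj, times.size)) - 1.0  # placeholder ERPs (uV)
frequent = rng.standard_normal((n_subj, times.size))

win = (times >= 0.3) & (times <= 0.5)                   # canonical N400 window
n400_rare = rare[:, win].mean(axis=1)
n400_freq = frequent[:, win].mean(axis=1)
t, p = ttest_rel(n400_rare, n400_freq)
print(f"t({n_subj - 1}) = {t:.2f}, p = {p:.4f}")
```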


Subject(s)
Electroencephalography, Speech Perception, Humans, Male, Female, Evoked Potentials, Language, Auditory Perception, Pitch Perception
9.
J Exp Child Psychol ; 242: 105897, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38461557

ABSTRACT

Previous studies have widely demonstrated that individuals with attention-deficit/hyperactivity disorder (ADHD) exhibit deficits in conflict control tasks. However, there is limited evidence regarding the performance of children with ADHD in cross-modal conflict processing. The current study investigated whether poor conflict control in children with ADHD affects sensory dominance at different levels of information processing under the influence of visual similarity. A total of 82 children aged 7 to 14 years were recruited: 41 children with ADHD and 41 age- and sex-matched typically developing (TD) children. We used the 2:1 mapping paradigm to separate levels of conflict, dividing the congruency of the audiovisual stimuli into three conditions. In C trials, the target and distractor stimuli were identical, and the bimodal stimuli corresponded to the same response key. In PRIC trials, the distractor differed from the target and did not correspond to any response key. In RIC trials, the distractor differed from the target, and the bimodal stimuli corresponded to different response keys. We thereby explicitly differentiated cross-modal conflict into a pre-response level (PRIC > C), corresponding to the encoding process, and a response level (RIC > PRIC), corresponding to response selection. Our results suggested that auditory distractors caused more interference during visual processing than visual distractors caused during auditory processing (i.e., typical auditory dominance) at the pre-response level, regardless of group. At the response level, however, visual dominance effects were observed in the ADHD group but not in the TD group. A possible explanation is that increased interference due to visual similarity made it more difficult for children with ADHD to control conflict when simultaneously confronted with incongruent visual and auditory inputs. The current study highlights how children with ADHD process cross-modal conflicts at multiple levels of information processing, shedding light on the mechanisms underlying ADHD.
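
The paradigm's two conflict levels reduce to simple contrasts of condition means, as sketched below with hypothetical reaction times (the values are placeholders, not data from the study):

```python
# Sketch: separate pre-response and response-level conflict from mean RTs,
# following the contrasts defined above (PRIC > C and RIC > PRIC).
rt = {"C": 520.0, "PRIC": 555.0, "RIC": 610.0}   # hypothetical mean RTs in ms

pre_response_conflict = rt["PRIC"] - rt["C"]      # encoding-level interference
response_conflict = rt["RIC"] - rt["PRIC"]        # response-selection interference
print(f"pre-response conflict: {pre_response_conflict:.0f} ms")
print(f"response conflict:     {response_conflict:.0f} ms")
```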


Subject(s)
Attention Deficit Disorder with Hyperactivity, Child, Humans, Visual Perception/physiology, Auditory Perception/physiology
10.
JASA Express Lett ; 4(3), 2024 03 01.
Article in English | MEDLINE | ID: mdl-38526127

ABSTRACT

Listeners performed two different tasks in which they remembered short sequences comprising either complex tones (generally heard as one melody) or everyday sounds (generally heard as separate objects). In one, listeners judged whether a probe item had been present in the preceding sequence. In the other, they judged whether a second sequence of the same items was identical in order to the preceding sequence. Performance on the first task was higher for everyday sounds; performance on the second was higher for complex tones. Perceptual organization strongly shapes listeners' memory for sounds, with implications for real-world communication.


Subject(s)
Auditory Perception, Memory, Short-Term, Sound, Hearing, Communication
11.
Neuroimage ; 291: 120582, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38521212

ABSTRACT

In the field of learning theory and practice, the superior efficacy of multisensory learning over uni-sensory learning is well accepted. However, the underlying neural mechanisms at the macro level of the human brain remain largely unexplored. This study addresses this gap by providing novel empirical evidence and a theoretical framework for understanding the superiority of multisensory learning. Through a cognitive, behavioral, and electroencephalographic assessment of carefully controlled uni-sensory and multisensory training interventions, our study uncovers a fundamental distinction in their neuroplastic patterns. A multilayered network analysis of pre- and post-training EEG data allowed us to model connectivity within and across different frequency bands at the cortical level. Pre-training EEG analysis unveils a complex network of distributed sources communicating through cross-frequency coupling, while comparison of pre- and post-training EEG data demonstrates significant differences between the reorganizational patterns of uni-sensory and multisensory learning. Uni-sensory training primarily modifies cross-frequency coupling between lower and higher frequencies, whereas multisensory training induces changes within the beta band in a more focused network, implying the development of a unified representation of audiovisual stimuli. In combination with the behavioural and cognitive findings, this suggests that multisensory learning benefits from an automatic top-down transfer of training, while uni-sensory training relies mainly on limited bottom-up generalization. Our findings offer a compelling theoretical framework for understanding the advantage of multisensory learning.
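
The abstract does not state which cross-frequency coupling estimator was used; as one common possibility, the sketch below computes phase-amplitude coupling with the mean-vector-length method on a synthetic theta-modulated beta signal. The frequency bands, filter design, and signal are assumptions:

```python
# Sketch: phase-amplitude coupling (mean vector length) as one cross-frequency
# coupling measure; not necessarily the estimator used in the study.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def pac_mvl(x, fs, phase_band=(4, 8), amp_band=(13, 30)):
    """Mean vector length of amplitude-weighted phases."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(5)
# Synthetic signal: beta amplitude modulated by theta phase, plus noise.
theta = np.sin(2 * np.pi * 6 * t)
x = (1 + 0.5 * theta) * np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)
print(f"theta-beta PAC: {pac_mvl(x, fs):.3f}")
```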


Subject(s)
Brain, Learning, Humans, Neuronal Plasticity, Auditory Perception, Visual Perception
12.
Int J Pediatr Otorhinolaryngol ; 179: 111908, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38461681

ABSTRACT

OBJECTIVES: The aim of this study was to develop a game-based mobile auditory training application for children aged 3-5 years using cochlear implants and to evaluate its usability. METHODS: Four games were developed within the application World of Sounds: Crucible Sound for auditory awareness, Mole Hunting for auditory discrimination, Find the Sound for auditory recognition, and Choo-Choo for auditory comprehension. The prototype was tested with 20 children with normal hearing and 20 children with cochlear implants, all aged 3-5 years. The participants were asked to fill out the Game Evaluation Form for Children. In addition, 40 parents were included in the study and completed the Evaluation Form for the Application. RESULTS: According to the form, at least 80% of the children using cochlear implants, and all children in the healthy group, responded well to the usability factors. All factors were rated as highly usable by parents of the children using cochlear implants. In the healthy group, the usefulness and motivation factors were rated above moderate, and the other factors as highly usable. In the Mole Hunting game, there was no significant difference between the groups at the easy level of the first sub-section (p > 0.05). There were significant differences between the groups in the other sub-sections of the Mole Hunting game and in all sub-sections of the Crucible Sound, Find the Sound, and Choo-Choo games (p < 0.05). While there was no correlation between duration of cochlear implant use and either ADSI scores or the third sub-section of the Crucible Sound game (p > 0.05), correlations were found in the other sub-sections of Crucible Sound and in all sub-sections of the Mole Hunting, Find the Sound, and Choo-Choo games (p < 0.05). CONCLUSION: The application World of Sounds can serve as an accessible option to support traditional auditory rehabilitation for children with cochlear implants.


Subject(s)
Cochlear Implantation, Cochlear Implants, Deafness, Hearing Loss, Sensorineural, Speech Perception, Child, Humans, Auditory Perception, Hearing Loss, Sensorineural/rehabilitation, Deafness/rehabilitation
13.
Dev Neurobiol ; 84(2): 47-58, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38466218

ABSTRACT

In sexually dimorphic zebra finches (Taeniopygia guttata), only males learn to sing their father's song, whereas females learn to recognize the songs of their father or mate but cannot sing themselves. Memory of learned songs is expressed behaviorally in females as a preference for familiar songs over unfamiliar ones. Auditory association regions such as the caudomedial mesopallium (CMM, or caudal mesopallium) have been shown to be key nodes in a network that supports preferences for learned songs in adult females. However, much less is known about how song preferences develop during the sensitive period of learning in juvenile female zebra finches. In this study, we used blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) to trace the development of a memory-based preference for the father's song in female zebra finches. We found that neural selectivity for the father's song was localized in the thalamus (dorsolateral nucleus of the medial thalamus, part of the anterior forebrain pathway, AFP) and in CMM only in adult females with a preference for learned song over novel conspecific song. These brain regions also showed selective responses in juvenile females, although the activation was less prominent. These data reveal that neural responses in CMM, and perhaps also in the AFP, are shaped during development to support behavioral preferences for learned songs.


Subject(s)
Finches, Vocalization, Animal, Male, Animals, Female, Vocalization, Animal/physiology, alpha-Fetoproteins/metabolism, Finches/metabolism, Acoustic Stimulation/methods, Auditory Perception/physiology, Prosencephalon/metabolism, Magnetic Resonance Imaging/methods
14.
Commun Biol ; 7(1): 317, 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38480875

ABSTRACT

Primate communication relies on multimodal cues, such as vision and audition, to facilitate the exchange of intentions, enable social interactions, avoid predators, and foster group cohesion during daily activities. Understanding the integration of facial and vocal signals is pivotal for comprehending social interaction. In this study, we acquired whole-brain ultra-high-field (9.4 T) fMRI data from awake marmosets (Callithrix jacchus) to explore brain responses to unimodal and combined facial and vocal stimuli. Our findings reveal that the multisensory condition not only intensifies activations in the occipito-temporal face patches and auditory voice patches but also engages a more extensive network, including additional parietal, prefrontal, and cingulate areas, compared with the summed responses of the unimodal conditions. By uncovering the neural network underlying multisensory audiovisual integration in marmosets, this study highlights the efficiency and adaptability of the marmoset brain in processing facial and vocal social signals, providing significant insights into primate social communication.
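
One standard way to operationalize 'more than the summed responses of the unimodal conditions' is a voxelwise superadditivity contrast, sketched below with placeholder beta maps and an arbitrary threshold (not the study's actual pipeline):

```python
# Sketch: voxelwise superadditivity contrast (multisensory > auditory + visual).
import numpy as np

rng = np.random.default_rng(6)
shape = (40, 48, 32)                        # placeholder beta maps, one per condition
beta_av = rng.standard_normal(shape) + 0.3  # audiovisual condition
beta_a = rng.standard_normal(shape) * 0.5   # auditory-only
beta_v = rng.standard_normal(shape) * 0.5   # visual-only

superadditive = beta_av - (beta_a + beta_v)
mask = superadditive > 1.0                  # placeholder threshold
print(f"superadditive voxels: {mask.sum()} / {mask.size}")
```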


Subject(s)
Callithrix, Magnetic Resonance Imaging, Animals, Callithrix/physiology, Vision, Ocular, Brain Mapping, Auditory Perception/physiology
15.
Otol Neurotol ; 45(4): e297-e306, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38437807

ABSTRACT

OBJECTIVE: This study investigated the influence of the number of channels and of channel interaction on timbre perception in cochlear implant (CI) processing. Vocoder simulations of CI processing were used to examine how different numbers of channels and degrees of channel interaction affect timbre perception, an essential aspect of music and auditory performance. STUDY DESIGN, SETTING, AND PATIENTS: Fourteen CI recipients, each with at least 1 year of CI device use, and two groups (N = 16 and N = 19) of normal-hearing (NH) participants completed a timbre recognition (TR) task. The first NH group was tested with varying numbers of channels (8, 12, 16, and 20) to determine the number that most closely reflected the TR performance of CI recipients. The second NH group was then tested on channel interaction, using the identified number of 20 channels, under three conditions: low interaction (54 dB/octave), medium interaction (24 dB/octave), and high interaction (12 dB/octave). Statistical analyses, including repeated-measures analysis of variance and pairwise comparisons, were conducted. RESULTS: The number of channels did not have a statistically significant effect on TR in NH participants (p > 0.05), although the 20-channel condition most closely resembled the TR performance of CI recipients. In contrast, channel interaction had a significant effect on TR (p < 0.001). Both the low-interaction (54 dB/octave) and high-interaction (12 dB/octave) conditions differed significantly from the actual CI recipients' performance. CONCLUSION: Timbre perception, a complex ability reliant on fine spectral resolution, was not significantly influenced by the number of channels; channel interaction, however, emerged as a significant factor. The differences observed under different channel-interaction conditions suggest potential mechanisms, including reduced spectro-temporal resolution and degraded spectral cues. These findings highlight the importance of considering channel interaction and optimizing CI processing strategies to enhance music perception and overall auditory performance for CI recipients.
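
The channel-interaction manipulation maps naturally onto the slope of a vocoder's synthesis filters. Below is a hedged sketch of a 20-channel noise vocoder in which the slope parameter (in dB/octave) controls spectral overlap between channels; the filter design is an assumption, using the rule of thumb that a Butterworth filter rolls off at about 6 dB/octave per order (zero-phase filtering doubles this):

```python
# Sketch: noise vocoder with channel interaction set by filter slope (dB/octave).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode(x, fs, n_channels=20, slope_db_oct=24, f_lo=100, f_hi=7000):
    order = max(1, round(slope_db_oct / 12))  # /12: sosfiltfilt doubles the slope
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, x)))            # channel envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(x.size))
        out += env * carrier                                  # envelope-modulated noise
    return out / np.max(np.abs(out))

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)                            # placeholder input
low_interaction = vocode(tone, fs, slope_db_oct=54)
high_interaction = vocode(tone, fs, slope_db_oct=12)
```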


Subject(s)
Cochlear Implantation, Cochlear Implants, Music, Speech Perception, Humans, Auditory Perception, Cochlear Implantation/methods, Hearing Tests/methods
16.
Elife ; 12, 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38470243

ABSTRACT

Preserved communication abilities promote healthy ageing. To this end, the age-typical loss of sensory acuity might in part be compensated for by an individual's preserved attentional neural filtering. Is such a compensatory brain-behaviour link longitudinally stable? Can it predict individual change in listening behaviour? Modelling electroencephalographic and behavioural data from N = 105 ageing individuals (39-82 y), we show here that individual listening behaviour and neural filtering ability follow largely independent developmental trajectories. First, despite the expected decline in hearing-threshold-derived sensory acuity, listening-task performance proved stable over 2 y. Second, neural filtering and behaviour were correlated only within each separate measurement timepoint (T1, T2). Longitudinally, however, our results urge caution in using attention-guided neural filtering metrics as predictors of individual trajectories in listening behaviour: under a combination of modelling strategies, neither neural filtering at T1 nor its 2-year change predicted individual 2-year behavioural change.


Humans are social animals. Communicating with other humans is vital for our social wellbeing, and having strong connections with others has been associated with healthier aging. For most humans, speech is an integral part of communication, but speech comprehension can be challenging in everyday social settings: imagine trying to follow a conversation in a crowded restaurant or decipher an announcement in a busy train station. Noisy environments are particularly difficult to navigate for older individuals, since age-related hearing loss can impact the ability to detect and distinguish speech sounds. Some aging individuals cope better than others with this problem, but the reason why, and how listening success can change over a lifetime, is poorly understood. One of the mechanisms involved in the segregation of speech from other sounds depends on the brain applying a 'neural filter' to auditory signals. The brain does this by aligning the activity of neurons in a part of the brain that deals with sounds, the auditory cortex, with fluctuations in the speech signal of interest. This neural 'speech tracking' can help the brain better encode the speech signals that a person is listening to. Tune and Obleser wanted to know whether the accuracy with which individuals can implement this filtering strategy represents a marker of listening success. Further, the researchers wanted to answer whether differences in the strength of the neural filtering observed between aging listeners could predict how their listening ability would develop, and determine whether these neural changes were connected with changes in people's behaviours. To answer these questions, Tune and Obleser used data collected from a group of healthy middle-aged and older listeners twice, two years apart. They then built mathematical models using these data to investigate how differences between individuals in the brain and in behaviours relate to each other. The researchers found that, across both timepoints, individuals with stronger neural filtering were better at distinguishing speech and listening. However, neural filtering strength measured at the first timepoint was not a good predictor of how well individuals would be able to listen two years later. Indeed, changes at the brain and the behavioural level occurred independently of each other. Tune and Obleser's findings will be relevant to neuroscientists, as well as to psychologists and audiologists whose goal is to understand differences between individuals in terms of listening success. The results suggest that neural filtering guided by attention to speech is an important readout of an individual's attention state. However, the results also caution against explaining listening performance based solely on neural factors, given that listening behaviours and neural filtering follow independent trajectories.
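
The 'neural filter' idea in the summary above, aligning cortical activity with fluctuations in the speech signal, can be illustrated with a simple envelope cross-correlation; the signals, sampling rate, and lag window below are placeholder assumptions, not the authors' method:

```python
# Sketch: envelope tracking as a toy "neural filtering" measure.
import numpy as np
from scipy.signal import hilbert

fs = 100
rng = np.random.default_rng(7)
t = np.arange(0, 60, 1 / fs)
envelope = np.abs(hilbert(rng.standard_normal(t.size)))  # placeholder speech envelope
eeg = 0.3 * np.roll(envelope, 12) + rng.standard_normal(t.size)  # tracks at ~120 ms lag

lags = np.arange(0, 31)                                   # 0-300 ms in samples
xcorr = [np.corrcoef(envelope[:-lag or None], eeg[lag:])[0, 1] for lag in lags]
best = int(np.argmax(xcorr))
print(f"peak tracking r = {xcorr[best]:.2f} at lag {best * 1000 // fs} ms")
```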


Subject(s)
Aging, Longevity, Adult, Humans, Brain, Auditory Perception, Benchmarking
17.
Cereb Cortex ; 34(3), 2024 03 01.
Article in English | MEDLINE | ID: mdl-38466116

ABSTRACT

Sound frequency and duration are essential auditory components. The brain perceives deviations from the preceding sound context as prediction errors, allowing efficient reactions to the environment. Additionally, the prediction error response to duration change is reduced in the initial stages of psychotic disorders. To compare the spatiotemporal profiles of responses to prediction errors, we conducted a human electrocorticography study, with special attention to high gamma power, in 13 participants who completed both frequency and duration oddball tasks. Remarkable activation in the bilateral superior temporal gyri was observed in both the frequency and duration oddball tasks, suggesting an association with prediction errors. However, the response to deviant stimuli in the duration oddball task exhibited a second peak, resulting in a bimodal response. Furthermore, deviant stimuli in the frequency oddball task elicited a significant response in the inferior frontal gyrus that was not observed in the duration oddball task. These spatiotemporal differences within the Parasylvian cortical network could account for our efficient reactions to changes in sound properties. The findings of this study may contribute to unveiling auditory processing and elucidating the pathophysiology of psychiatric disorders.
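
A hedged sketch of the deviant-minus-standard high-gamma contrast underlying such oddball analyses; the epoch arrays, band edges (70-150 Hz), and power estimator are assumptions, and the data are placeholders:

```python
# Sketch: deviant-minus-standard high-gamma power contrast for an oddball task.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 1000
rng = np.random.default_rng(8)
standard = rng.standard_normal((100, fs))        # placeholder epochs (trials x samples)
deviant = rng.standard_normal((30, fs)) * 1.2

sos = butter(4, [70, 150], btype="band", fs=fs, output="sos")

def high_gamma_power(epochs):
    """Band-limited analytic amplitude squared, averaged over trials and time."""
    return (np.abs(hilbert(sosfiltfilt(sos, epochs, axis=1), axis=1)) ** 2).mean()

contrast = high_gamma_power(deviant) - high_gamma_power(standard)
print(f"deviant - standard high-gamma power: {contrast:.3f}")
```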


Subject(s)
Brain, Electrocorticography, Humans, Prefrontal Cortex, Sound, Auditory Perception
18.
eNeuro ; 11(3), 2024 Mar.
Article in English | MEDLINE | ID: mdl-38467426

ABSTRACT

Auditory perception can be significantly disrupted by noise. To discriminate sounds from noise, auditory scene analysis (ASA) extracts the functionally relevant sounds from the acoustic input. The zebra finch communicates in noisy environments. Neurons in its secondary auditory pallial cortex (caudomedial nidopallium, NCM) can encode song against a background chorus, or scene, and this capacity may aid behavioral ASA. Furthermore, song processing is modulated by the rapid synthesis of neuroestrogens when hearing conspecific song. To examine whether neuroestrogens support neural and behavioral ASA in both sexes, we retrodialyzed fadrozole (an aromatase inhibitor, FAD) and recorded in vivo awake extracellular NCM responses to songs and scenes. We found that FAD affected the neural encoding of songs by decreasing responsiveness and timing reliability in inhibitory (narrow-spiking), but not excitatory (broad-spiking), neurons. Congruently, FAD decreased the neural encoding of songs in scenes for both cell types, particularly in females. Behaviorally, we trained birds using operant conditioning and tested their ability to detect songs in scenes after administering FAD orally or injected bilaterally into NCM. Oral FAD increased response bias and decreased correct rejections in females, but not in males. FAD in NCM did not affect performance. Thus, FAD in the NCM impaired neuronal ASA but did not lead to behavioral disruption, suggesting resilience or compensatory responses. Moreover, the impaired performance after systemic FAD suggests the involvement of other aromatase-rich networks outside the auditory pathway in ASA. This work highlights how transient disruption of estrogen synthesis can modulate higher-order processing in an animal model of vocal communication.
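
Response bias and correct rejections, as reported above, follow from standard signal detection theory; a minimal sketch with placeholder trial counts (not the study's data):

```python
# Sketch: sensitivity (d') and response bias (criterion c) for a
# song-in-scene detection task.
from scipy.stats import norm

hits, misses = 38, 12                 # hypothetical "song present" trials
false_alarms, correct_rej = 20, 30    # hypothetical "song absent" trials

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rej)
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```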


Subject(s)
Auditory Cortex, Finches, Female, Animals, Male, Finches/physiology, Aromatase, Reproducibility of Results, Vocalization, Animal/physiology, Acoustic Stimulation, Auditory Pathways/physiology, Auditory Perception/physiology, Auditory Cortex/physiology
19.
Hum Brain Mapp ; 45(4): e26653, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38488460

ABSTRACT

Face-to-face communication relies on the integration of acoustic speech signals with the corresponding facial articulations. In the McGurk illusion, an auditory /ba/ phoneme presented simultaneously with the facial articulation of a /ga/ (i.e., a viseme) is typically fused into an illusory 'da' percept. Despite its widespread use as an index of audiovisual speech integration, critics argue that it arises from perceptual processes that differ categorically from natural speech recognition. Conversely, Bayesian theoretical frameworks suggest that both the illusory McGurk percept and the veridical audiovisual congruent speech percept result from probabilistic inference based on noisy sensory signals. According to these models, the inter-sensory conflict in McGurk stimuli may merely increase observers' perceptual uncertainty. This functional magnetic resonance imaging (fMRI) study presented participants (20 male and 24 female) with audiovisual congruent, McGurk (i.e., auditory /ba/ + visual /ga/), and incongruent (i.e., auditory /ga/ + visual /ba/) stimuli, along with their unisensory counterparts, in a syllable categorization task. Behaviorally, observers' response entropy was greater for McGurk than for congruent audiovisual stimuli. At the neural level, McGurk stimuli increased activations in a widespread neural system, extending from the inferior frontal sulci (IFS) to the pre-supplementary motor area (pre-SMA) and insulae, regions typically involved in cognitive control. Crucially, in line with Bayesian theories, these activation increases were fully accounted for by observers' perceptual uncertainty as measured by their response entropy. Our findings suggest that McGurk and congruent speech processing rely on shared neural mechanisms, supporting the McGurk illusion as a valid measure of natural audiovisual speech perception.
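
Response entropy, the uncertainty measure used above, is the Shannon entropy of each observer's response distribution; a minimal sketch with placeholder response counts (not the study's data):

```python
# Sketch: response entropy as a perceptual-uncertainty measure from
# syllable-categorisation response counts.
import numpy as np

def response_entropy(counts):
    """Shannon entropy (bits) of a response-count distribution."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

congruent = [45, 3, 2]        # e.g. responses "ba" / "da" / "ga"
mcgurk = [10, 28, 12]         # more dispersed responses -> higher uncertainty
print(f"congruent entropy: {response_entropy(congruent):.2f} bits")
print(f"McGurk entropy:    {response_entropy(mcgurk):.2f} bits")
```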


Subject(s)
Illusions, Speech Perception, Humans, Male, Female, Auditory Perception/physiology, Speech/physiology, Illusions/physiology, Visual Perception/physiology, Bayes Theorem, Uncertainty, Speech Perception/physiology, Acoustic Stimulation/methods, Photic Stimulation/methods
20.
Curr Biol ; 34(6): R229-R231, 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38531310

ABSTRACT

Zhu et al. present an alternative explanation for the weaker multisensory illusions in football goalkeepers compared with outfielders and non-athletes, showing that better unisensory precision in goalkeepers can also account for this effect.


Subject(s)
Auditory Perception, Illusions, Humans, Visual Perception, Acoustic Stimulation, Reaction Time, Photic Stimulation